Traditional multilingual neural machine translation (MNMT) uses a single model to translate all directions. However, as the number of language pairs grows, using a single model for massive MNMT brings new challenges: parameter tension and heavy computation. In this paper, we revisit multi-way structures by assigning an individual branch to each language (group). Despite the simple architecture, such decentralized models are challenging to train due to the lack of constraints aligning the representations of all languages. We propose a localized training recipe that maps the different branches into a unified space, resulting in an efficient detachable model, Lego-MT. For a fair comparison, we collect data from OPUS and build the first large-scale open-source translation benchmark covering 7 language-centric datasets, each containing 445 language pairs. Experiments show that Lego-MT (1.2B) brings gains of more than 4 BLEU while outperforming M2M-100 (12B). (We will release all training data, models, and checkpoints.)
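To make the multi-way idea concrete, here is a minimal sketch (not the released Lego-MT code) of a detachable multi-branch translation model: every language owns its own encoder/decoder branch, all branches read from and write to a shared latent space, and only the two branches needed for a given direction must be resident in memory. All module names and sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

D_MODEL, VOCAB = 512, 32000

def make_encoder():
    layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=2)

def make_decoder():
    layer = nn.TransformerDecoderLayer(D_MODEL, nhead=8, batch_first=True)
    return nn.TransformerDecoder(layer, num_layers=2)

class DetachableMT(nn.Module):
    """One branch per language; all branches share a unified latent space."""
    def __init__(self, langs):
        super().__init__()
        self.embed = nn.ModuleDict({l: nn.Embedding(VOCAB, D_MODEL) for l in langs})
        self.encoders = nn.ModuleDict({l: make_encoder() for l in langs})
        self.decoders = nn.ModuleDict({l: make_decoder() for l in langs})
        self.heads = nn.ModuleDict({l: nn.Linear(D_MODEL, VOCAB) for l in langs})

    def forward(self, src_ids, tgt_ids, src_lang, tgt_lang):
        # The source branch maps tokens into the unified space ...
        memory = self.encoders[src_lang](self.embed[src_lang](src_ids))
        # ... and the target branch decodes out of it; any src/tgt pairing
        # works if training has aligned all branches in the same space.
        hidden = self.decoders[tgt_lang](self.embed[tgt_lang](tgt_ids), memory)
        return self.heads[tgt_lang](hidden)

model = DetachableMT(["en", "fr", "zh"])
logits = model(torch.randint(0, VOCAB, (2, 7)),
               torch.randint(0, VOCAB, (2, 5)), "en", "fr")
print(logits.shape)  # torch.Size([2, 5, 32000])
```

The detachable property follows from the structure: serving a single direction only requires loading two branches, not the whole multilingual model.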
State-of-the-art 3D semantic segmentation models are trained on off-the-shelf public benchmarks, but they often face a major challenge when deployed to a new domain. In this paper, we propose an Active-and-Adaptive Segmentation (ADAS) baseline to enhance the weak cross-domain generalization ability of a well-trained 3D segmentation model and bridge the point-distribution gap between domains. Specifically, before the cross-domain adaptation stage begins, ADAS performs an active sampling operation to select a maximally informative subset from both the source and target domains for effective adaptation, reducing the adaptation difficulty in 3D scenarios. Benefiting from the rise of multi-modal 2D-3D datasets, ADAS utilizes a cross-modal attention-based feature fusion module that extracts a representative pair of image and point features to achieve bi-directional image-point feature interaction for safer adaptation. Experimentally, ADAS is verified to be effective in many cross-domain settings, including: 1) Unsupervised Domain Adaptation (UDA), where all samples from the target domain are unlabeled; 2) Unsupervised Few-shot Domain Adaptation (UFDA), where only a few unlabeled samples are available in the target domain; and 3) Active Domain Adaptation (ADA), where the target samples selected by ADAS are manually annotated. The results demonstrate that ADAS achieves significant accuracy gains when easily coupled with self-training methods or off-the-shelf UDA works.
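A hedged sketch of the bi-directional image-point interaction using standard cross-attention follows; the exact fusion module in ADAS may differ, and the shapes and names here are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class BiCrossModalFusion(nn.Module):
    """Bi-directional cross-attention between image and point features."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.img2pt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pt2img = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, img_feat, pt_feat):
        # Points attend to image features (2D appearance -> 3D points) ...
        pt_out, _ = self.img2pt(query=pt_feat, key=img_feat, value=img_feat)
        # ... and image tokens attend to point features (3D geometry -> 2D).
        img_out, _ = self.pt2img(query=img_feat, key=pt_feat, value=pt_feat)
        # Residual fusion keeps each stream's original information.
        return img_feat + img_out, pt_feat + pt_out

fusion = BiCrossModalFusion()
img = torch.randn(2, 196, 256)   # e.g., flattened image patch features
pts = torch.randn(2, 1024, 256)  # per-point features
img_f, pts_f = fusion(img, pts)
print(img_f.shape, pts_f.shape)
```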
Diffusion models, a new generative modelling paradigm, have achieved great success in image, audio, and video generation. However, given the discrete, categorical nature of text, extending continuous diffusion models to natural language is non-trivial, and text diffusion models remain under-studied. Sequence-to-sequence text generation is one of the essential natural language processing topics. In this work, we apply diffusion models to sequence-to-sequence text generation and explore whether the superior generation performance of diffusion models can transfer to the natural language domain. We propose SeqDiffuSeq, a text diffusion model for sequence-to-sequence generation. SeqDiffuSeq uses an encoder-decoder Transformer architecture to model the denoising function. To improve generation quality, SeqDiffuSeq combines the self-conditioning technique with a newly proposed adaptive noise schedule. The adaptive noise schedule balances the difficulty of denoising evenly across time steps and assigns an exclusive noise schedule to tokens at each position. Experimental results show good performance on sequence-to-sequence generation in terms of both text quality and inference time.
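Below is a minimal sketch of the self-conditioning technique the abstract mentions (introduced earlier in the diffusion literature): with some probability, the denoiser also receives its own detached previous estimate of the clean embeddings as extra input. The toy denoiser is a stand-in, not the paper's encoder-decoder model.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # Input is [noisy z_t ; previous x0 estimate], hence 2 * dim.
        self.net = nn.Sequential(nn.Linear(2 * dim, 256), nn.GELU(),
                                 nn.Linear(256, dim))

    def forward(self, z_t, x0_prev):
        return self.net(torch.cat([z_t, x0_prev], dim=-1))

def training_step(model, z_t, p_selfcond=0.5):
    x0_prev = torch.zeros_like(z_t)
    if torch.rand(()) < p_selfcond:
        # The first pass produces an estimate that is detached (no gradient)
        # and fed back in as conditioning for the actual prediction.
        with torch.no_grad():
            x0_prev = model(z_t, x0_prev)
    return model(z_t, x0_prev)

model = Denoiser()
x0_hat = training_step(model, torch.randn(4, 16, 128))
print(x0_hat.shape)  # torch.Size([4, 16, 128])
```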
Language models with the Transformer structure have shown great performance in natural language processing. However, fine-tuning pre-trained language models on downstream tasks still poses problems such as over-fitting or representation collapse. In this work, we propose HyPe, a simple yet effective fine-tuning technique that alleviates such problems by perturbing the hidden representations of Transformer layers. Unlike previous works that only add noise to inputs or parameters, we argue that the hidden representations of Transformer layers convey more diverse and meaningful language information. Therefore, making Transformer layers more robust to hidden-representation perturbations can further benefit the fine-tuning of PLMs en bloc. We conduct extensive experiments and analyses on GLUE and other natural language inference datasets. Results demonstrate that HyPe outperforms vanilla fine-tuning and enhances the generalization of hidden representations from different layers. In addition, HyPe incurs negligible computational overhead and is both better than and compatible with previous state-of-the-art fine-tuning techniques.
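A minimal sketch of the core mechanism follows, assuming Gaussian or uniform noise with a small scale is injected into the hidden states between Transformer layers during training only; the wrapper-based wiring and the toy encoder stack are illustrative assumptions, and the exact noise types and scales are detailed in the paper.

```python
import torch
import torch.nn as nn

def hype_noise(hidden, eps=1e-5, dist="normal"):
    # Small-magnitude perturbation of the hidden representations.
    if dist == "normal":
        return hidden + eps * torch.randn_like(hidden)
    return hidden + eps * (2 * torch.rand_like(hidden) - 1)

class PerturbedLayer(nn.Module):
    """Wraps a Transformer layer and perturbs its input while training."""
    def __init__(self, layer, eps=1e-5):
        super().__init__()
        self.layer, self.eps = layer, eps

    def forward(self, hidden, *args, **kwargs):
        if self.training:  # no perturbation at inference time
            hidden = hype_noise(hidden, self.eps)
        return self.layer(hidden, *args, **kwargs)

# Toy stack standing in for a pre-trained encoder being fine-tuned.
layers = nn.ModuleList([
    PerturbedLayer(nn.TransformerEncoderLayer(64, 4, batch_first=True))
    for _ in range(2)])
x = torch.randn(2, 10, 64)
for layer in layers:
    x = layer(x)
print(x.shape)  # torch.Size([2, 10, 64])
```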
Recently, a surge of high-quality 3D-aware GANs has been proposed, leveraging the generative power of neural rendering. It is natural to combine 3D GANs with GAN inversion methods to project a real image into the generator's latent space, allowing free-view consistent synthesis and editing, referred to as 3D GAN inversion. Although a facial prior is preserved in pre-trained 3D GANs, reconstructing a 3D portrait from a single monocular image remains an ill-posed problem. Straightforward application of 2D GAN inversion methods focuses on texture similarity alone while ignoring the correctness of the 3D geometry, which can cause geometry-collapse effects, especially when reconstructing a side face under an extreme pose. Moreover, synthesized results in novel views are prone to blurriness. In this work, we propose a novel method that improves 3D GAN inversion by introducing a facial symmetry prior. We design a pipeline and constraints that make full use of the pseudo auxiliary view obtained via image flipping, which helps recover a robust and reasonable geometry during the inversion process. To enhance texture fidelity in unobserved viewpoints, pseudo labels from depth-guided 3D warping provide extra supervision. We further design constraints that filter out conflicting areas from the optimization in asymmetric situations. Comprehensive quantitative and qualitative evaluations on image reconstruction and editing demonstrate the superiority of our method.
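A simplified sketch of how the symmetry prior could enter the inversion objective: alongside the observed view, the horizontally flipped image is treated as a pseudo auxiliary view rendered from the mirrored camera pose, and both contribute a reconstruction loss during latent optimization. The generator here is a dummy stand-in; a real pipeline would use a pre-trained 3D GAN, proper camera poses, and the paper's conflict-area filtering for asymmetric regions.

```python
import torch
import torch.nn.functional as F

def symmetry_inversion_loss(generator, w, img, cam, cam_mirror, lam=0.5):
    # Loss on the real observed view.
    loss_main = F.l1_loss(generator(w, cam), img)
    # Pseudo auxiliary view: the flipped image should match a render from
    # the mirrored camera if the face is (roughly) left-right symmetric.
    pseudo = torch.flip(img, dims=[-1])
    loss_aux = F.l1_loss(generator(w, cam_mirror), pseudo)
    return loss_main + lam * loss_aux

# Dummy stand-ins so the sketch runs end to end.
generator = lambda w, cam: torch.tanh(w)        # ignores the camera pose
w = torch.randn(1, 3, 64, 64, requires_grad=True)
img = torch.rand(1, 3, 64, 64)
loss = symmetry_inversion_loss(generator, w, img, cam=None, cam_mirror=None)
loss.backward()
print(float(loss))
```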
The ultimate goal of continuous sign language recognition (CSLR) is to facilitate communication between hearing-impaired and hearing people, which requires a certain degree of real-time performance and deployability from the model. However, previous research on CSLR has paid little attention to real-time performance and deployability. To improve both, this paper proposes a zero-parameter, zero-computation temporal superposition crossover module (TSCM) and combines it with 2D convolution to form a "TSCM+2D convolution" hybrid convolution, which gives 2D convolution strong spatial-temporal modelling capability with zero additional parameters and lower deployment cost than other spatial-temporal convolutions. The overall TSCM-based CSLR model is built on the improved ResBlockT network proposed in this paper: the "TSCM+2D convolution" hybrid convolution is applied to the ResBlock of the ResNet network to form the new ResBlockT, and random gradient stopping and a multi-level CTC loss are introduced to train the model, which reduces the final recognition WER while lowering training memory usage and extends the ResNet network from the image classification task to the video recognition task. In addition, this study is the first in CSLR to extract the temporal-spatial features of sign language videos using only 2D convolution for end-to-end recognition. Experiments on two large-scale continuous sign language datasets demonstrate the effectiveness of the proposed method and achieve highly competitive results.
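The abstract does not spell out TSCM's exact crossover rule, so the following is a hedged sketch in the spirit of zero-parameter temporal modules such as TSM: a fraction of the channels is exchanged with neighbouring frames before the 2D convolution, giving the 2D convolution access to temporal context at no parameter cost. The crossover rule and block layout here are assumptions, not the paper's definition.

```python
import torch
import torch.nn as nn

def temporal_crossover(x, frac=8):
    # x: (batch, time, channels, H, W); move 1/frac of the channels one step
    # backward in time and another 1/frac one step forward.
    out = x.clone()
    c = x.size(2) // frac
    out[:, :-1, :c] = x[:, 1:, :c]            # borrow from the next frame
    out[:, 1:, c:2 * c] = x[:, :-1, c:2 * c]  # borrow from the previous frame
    return out

class TSCM2d(nn.Module):
    """Illustrative 'TSCM + 2D convolution' style hybrid block."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):  # x: (B, T, C, H, W)
        b, t, c, h, w = x.shape
        x = temporal_crossover(x)  # zero-parameter temporal mixing
        return self.conv(x.reshape(b * t, c, h, w)).reshape(b, t, c, h, w)

block = TSCM2d(16)
print(block(torch.randn(2, 8, 16, 32, 32)).shape)
```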
In this work, we explore data augmentation for knowledge distillation in semantic segmentation. To avoid over-fitting to noise from the teacher network, a large number of training examples is essential for knowledge distillation. Image-level augmentation techniques such as flipping, translation, and rotation have been widely used in previous knowledge distillation frameworks. Inspired by recent progress on semantic directions in feature space, we propose augmenting features in feature space for effective distillation. Specifically, given a semantic direction, an infinite number of augmentations can be obtained for the student in feature space. Furthermore, our analysis shows that these augmentations can be optimized simultaneously by minimizing an upper bound of the augmentation loss. Based on this observation, we develop a new algorithm for knowledge distillation in semantic segmentation. Extensive experiments on four semantic segmentation benchmarks demonstrate that the proposed method can boost the performance of current knowledge distillation methods without any significant overhead. Code is available at: https://github.com/jianlong-yuan/fakd.
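To illustrate the idea, here is a hedged sketch that explicitly samples a finite number of feature-space augmentations along a crude proxy for semantic directions (the per-channel feature variance, as in ISDA-style methods) and averages the distillation loss over them; the paper instead minimizes a closed-form upper bound over infinitely many augmentations, and the direction estimate and loss here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def augmented_kd_loss(stu_feat, tea_feat, num_aug=4, scale=0.1):
    # stu_feat, tea_feat: (batch, channels, H, W) feature maps.
    # Per-channel variance as a cheap proxy for semantic directions.
    var = stu_feat.var(dim=(0, 2, 3), keepdim=True)
    loss = F.mse_loss(stu_feat, tea_feat)
    for _ in range(num_aug):
        # Perturb the student features along the estimated directions and
        # distill each augmented copy toward the same teacher features.
        noise = torch.randn_like(stu_feat) * (scale * var.sqrt())
        loss = loss + F.mse_loss(stu_feat + noise, tea_feat)
    return loss / (num_aug + 1)

loss = augmented_kd_loss(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(float(loss))
```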
Over the past few years, self-attention-based Transformer models have been dominating many computer vision tasks. Their superior model quality depends heavily on excessively large labeled image datasets. To reduce the reliance on large labeled datasets, reconstruction-based masked autoencoders are gaining popularity; they learn high-quality transferable representations from unlabeled images. For the same purpose, recent weakly supervised image pre-training methods explore language supervision from the text captions that accompany images. In this work, we propose masked image pretraining on language-assisted representation, dubbed MILAN. Instead of predicting raw pixels or low-level features, our pretraining objective is to reconstruct image features that carry substantial semantic signals, obtained using caption supervision. Moreover, to accommodate our reconstruction target, we propose a more efficient prompting decoder architecture and a semantic-aware mask sampling mechanism, which further advance the transfer performance of the pretrained model. Experimental results demonstrate that MILAN delivers higher accuracy than previous works. When the masked autoencoder is pretrained and finetuned on the ImageNet-1K dataset with an input resolution of 224x224, MILAN achieves a top-1 accuracy of 85.4% on ViT-B/16, surpassing the previous state of the art by 1%. In the downstream semantic segmentation task, MILAN achieves 52.7 mIoU on the ADE20K dataset with a ViT-B/16 backbone, outperforming previous masked pretraining results by 4 points.
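A hedged sketch of the MILAN-style objective follows: instead of regressing raw pixels, the decoder's outputs for masked patches are matched to per-patch features from a language-supervised image encoder (e.g., CLIP). The teacher features below are random stand-ins, and the cosine-distance loss on masked patches is an illustrative assumption about the exact loss form.

```python
import torch
import torch.nn.functional as F

def milan_style_loss(pred_feats, target_feats, mask):
    # pred_feats, target_feats: (batch, patches, dim); mask: (batch, patches),
    # 1 where the patch was masked. Cosine distance on masked patches only.
    pred = F.normalize(pred_feats, dim=-1)
    tgt = F.normalize(target_feats, dim=-1)
    dist = 1 - (pred * tgt).sum(-1)          # per-patch cosine distance
    return (dist * mask).sum() / mask.sum()  # average over masked patches

pred = torch.randn(2, 196, 512)              # decoder outputs
with torch.no_grad():                        # caption-supervised teacher feats
    target = torch.randn(2, 196, 512)
mask = (torch.rand(2, 196) < 0.75).float()   # MAE-style 75% masking ratio
print(float(milan_style_loss(pred, target, mask)))
```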
Considerable progress has been made in domain generalization (DG), which aims to learn a generalizable model from multiple annotated source domains for an unseen target domain. However, in many practical scenarios, obtaining sufficient annotations for the source datasets can be prohibitively expensive. To escape the dilemma between domain generalization and annotation cost, in this paper we introduce a new task named label-efficient domain generalization (LEDG), which enables model generalization with label-limited source domains. To address this challenging task, we propose a novel framework called Collaborative Exploration and Generalization (CEG), which jointly optimizes active exploration and semi-supervised generalization. Specifically, in active exploration, to explore class and domain discriminability while avoiding information divergence and redundancy, we query the labels of the samples with the highest overall ranking of class uncertainty, domain representativeness, and information diversity. In semi-supervised generalization, we design mixup-based intra- and inter-domain knowledge augmentation to expand domain knowledge and generalize domain invariance. We unify active exploration and semi-supervised generalization in a collaborative way and promote mutual enhancement between them, boosting model generalization with limited annotations. Extensive experiments show that CEG yields outstanding generalization performance. In particular, CEG can even achieve competitive results on the PACS dataset with only a 5% data annotation budget, compared with previous DG methods using fully labeled data.
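A hedged sketch of the active-query scoring in the spirit of CEG: rank unlabeled samples by a combination of class uncertainty (prediction entropy), domain representativeness (here a generic domain score), and information diversity (distance to already-selected samples). The weights, the representativeness proxy, and the diversity term are illustrative assumptions, not the paper's exact criteria.

```python
import torch

def query_scores(class_probs, domain_scores, feats, selected_feats,
                 w=(1.0, 1.0, 1.0)):
    # Class uncertainty: entropy of the class posterior.
    entropy = -(class_probs * class_probs.clamp_min(1e-8).log()).sum(-1)
    # Information diversity: distance to the nearest already-selected sample.
    if selected_feats.numel() > 0:
        diversity = torch.cdist(feats, selected_feats).min(dim=1).values
    else:
        diversity = torch.ones(feats.size(0))
    return w[0] * entropy + w[1] * domain_scores + w[2] * diversity

probs = torch.softmax(torch.randn(100, 7), dim=-1)  # 7 classes (as in PACS)
dom = torch.rand(100)            # higher = more domain-representative
feats = torch.randn(100, 64)
chosen = query_scores(probs, dom, feats, torch.randn(5, 64)).topk(10).indices
print(chosen)                    # indices of the samples to annotate next
```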
To address the problem that deep-learning-based spatial-temporal hierarchical continuous sign language recognition models involve a large amount of computation, which limits their real-time application, this paper proposes a temporal super-resolution network (TSRNet). The data are reconstructed into a dense feature sequence to reduce the overall model computation while keeping the loss in final recognition accuracy to a minimum. The continuous sign language recognition (CSLR) model via TSRNet consists mainly of three parts: frame-level feature extraction, time-series feature extraction, and TSRNet. TSRNet sits between frame-level feature extraction and time-series feature extraction and mainly comprises two branches: a detail descriptor and a rough descriptor. The sparse frame-level features are fused with the features obtained from the two designed branches to form the reconstructed dense frame-level feature sequence, and connectionist temporal classification (CTC) loss is used for training and optimization after the time-series feature extraction part. To better recover semantic-level information, the overall model is trained with the self-generating adversarial training method proposed in this paper to reduce the model error rate; this method treats TSRNet as the generator, and the frame-level processing part and the temporal processing part as the discriminators. In addition, to unify the evaluation criteria for model accuracy loss under different benchmarks, this paper proposes word error rate deviation (WERD), defined as the deviation between the estimated word error rate (WER) obtained from the reconstructed frame-level feature sequence and the reference WER obtained from the complete original frame-level feature sequence. Experiments on two large-scale sign language datasets demonstrate the effectiveness of the proposed model.
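A small sketch of the proposed WERD metric, interpreted from the description above as the difference between the WER measured on the reconstructed (sparse-to-dense) feature sequence and the reference WER measured on the complete original sequence; the WER routine is a standard word-level edit-distance implementation, and the exact deviation formula in the paper may differ.

```python
def wer(ref_words, hyp_words):
    # Levenshtein distance over words, normalized by reference length.
    d = [[0] * (len(hyp_words) + 1) for _ in range(len(ref_words) + 1)]
    for i in range(len(ref_words) + 1):
        d[i][0] = i
    for j in range(len(hyp_words) + 1):
        d[0][j] = j
    for i in range(1, len(ref_words) + 1):
        for j in range(1, len(hyp_words) + 1):
            sub = d[i - 1][j - 1] + (ref_words[i - 1] != hyp_words[j - 1])
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, sub)
    return d[-1][-1] / max(1, len(ref_words))

def werd(ref, hyp_reconstructed, hyp_full):
    # Deviation of the reconstructed-sequence WER from the reference WER.
    return wer(ref, hyp_reconstructed) - wer(ref, hyp_full)

ref = "i want to go home".split()
print(werd(ref, "i want go home".split(), "i want to go home".split()))
```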